7 research outputs found

    Brain-Inspired Intelligent Systems for Daily Assistance

    The fields of machine learning and cognitive computing have, in the last decade, been revolutionised by neural-inspired algorithms (e.g., deep ANNs and deep RL) and brain-inspired systems that assist in many real-world learning tasks, from robot monitoring and interaction at home to complex decision-making about emotions and behaviours in humans and animals. While these brain-inspired algorithms and systems have made remarkable advances, they need to be trained on huge data sets, and they lack the flexibility to adapt to diverse learning tasks and to sustain performance over long periods of time. To address these challenges, it is essential to gain an analytical understanding of the principles that allow biologically inspired intelligent systems to leverage knowledge, and of how those principles can be translated to hardware for daily assistance and practical applications. This special issue brings together researchers from interdisciplinary domains to report their latest work on algorithms and neural-inspired systems that flexibly adapt to new learning tasks, learn from the environment using multimodal signals (e.g., neural, physiological, and kinematic), and produce autonomous adaptive agents that utilise cognitive and affective data within a social-neuroscientific framework. After a careful reviewing process, we selected five papers out of fourteen high-quality submissions, an acceptance rate of 35.7 percent. The five papers are representative of the current state of the art in this area.

    An End-to-End Automated License Plate Recognition System Using YOLO Based Vehicle and License Plate Detection with Vehicle Classification

    An accurate and robust Automatic License Plate Recognition (ALPR) method offers remarkable versatility in Intelligent Transportation and Surveillance (ITS) systems. However, most existing approaches rely on prior knowledge or fixed pre- and post-processing rules and are thus limited by poor generalisation in complex real-life conditions. In this paper, we present a YOLO-based end-to-end generic ALPR pipeline for vehicle detection (VD) and license plate (LP) detection and recognition, without exploiting prior knowledge or additional steps at inference. We assess the whole ALPR pipeline, from vehicle detection through to the LP recognition stage, including a vehicle classifier for emergency vehicles and heavy trucks. We used YOLO v2 in the initial stage of the pipeline; the remaining stages are based on the state-of-the-art YOLO v4 detector, with various data augmentation and generation techniques, to obtain LP recognition accuracy on par with currently proposed methods. To evaluate our approach, we used five public datasets from different regions and achieved an average recognition accuracy of 90.3% while maintaining an acceptable frame rate (FPS) on a low-end GPU.
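    The cascaded structure described above (vehicle detection feeding LP detection, recognition, and vehicle classification) can be sketched as follows. This is not the paper's code: the stage functions here are hypothetical stand-ins for the YOLO v2/v4 detectors, wired together only to illustrate how the stages compose.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Detection:
        label: str       # e.g. "car", "truck", or "LP"
        box: tuple       # (x, y, w, h) in image coordinates
        score: float

    def alpr_pipeline(image,
                      detect_vehicles: Callable,
                      classify_vehicle: Callable,
                      detect_plate: Callable,
                      recognise_plate: Callable) -> List[dict]:
        """Cascade the stages: every detected vehicle crop feeds the LP stages."""
        results = []
        for vehicle in detect_vehicles(image):
            crop = ("crop", vehicle.box)          # placeholder for image cropping
            plates = detect_plate(crop)
            plate_text = recognise_plate(plates[0]) if plates else None
            results.append({
                "vehicle": vehicle.label,
                "class": classify_vehicle(crop),  # e.g. emergency / heavy truck
                "plate": plate_text,
            })
        return results

    # Stub stages standing in for the trained detectors:
    demo = alpr_pipeline(
        image=None,
        detect_vehicles=lambda img: [Detection("car", (10, 10, 200, 120), 0.98)],
        classify_vehicle=lambda crop: "non-emergency",
        detect_plate=lambda crop: [Detection("LP", (60, 90, 80, 25), 0.95)],
        recognise_plate=lambda det: "ABC123",
    )
    print(demo)
    ```

    The design point the sketch makes is that an end-to-end pipeline needs no fixed post-processing rules: each stage consumes the previous stage's detections directly.
    
    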

    Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network

    Physical traits such as the shape of the hand and face can be used for human recognition and identification in video surveillance systems, in biometric authentication smart-card systems, and in personal health care. However, the accuracy of such systems suffers from illumination changes, unpredictability, and variability in appearance (e.g. occluded faces or hands, cluttered backgrounds, etc.). This work evaluates different statistical and chrominance models in environments with increasingly cluttered backgrounds, where changes in lighting are common and no occlusions are applied, in order to obtain a reliable neural-network reconstruction of faces and hands without taking into account the structural and temporal kinematics of the hands. First, a statistical model is used for skin-colour segmentation to roughly locate hands and faces. Then, a neural network is used to reconstruct the hands and faces in 3D. For filtering and reconstruction we use the growing neural gas algorithm, which can preserve the topology of an object without restarting the learning process. Experiments were conducted on our own database, on four benchmark databases (Stirling's, Alicante, Essex, and Stegmann's), and on ordinary 2D videos of deaf individuals that are freely available in the BSL SignBank dataset. Results demonstrate the validity of our system in solving problems of face and hand segmentation and reconstruction under different environmental conditions.
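    To make the topology-preserving property concrete, here is a deliberately simplified growing neural gas sketch (not the paper's implementation): nodes chase the input samples, edges between co-winning nodes carry ages, and new nodes are inserted near the largest accumulated error. Node deletion and the edge bookkeeping around insertion are omitted for brevity.

    ```python
    import math, random

    def gng(samples, max_nodes=20, eps_w=0.2, eps_n=0.006,
            max_age=50, insert_every=25, steps=1000, seed=0):
        """Simplified growing neural gas: nodes track the input
        distribution; aged edges preserve its topology."""
        rnd = random.Random(seed)
        nodes = [list(rnd.choice(samples)) for _ in range(2)]   # two seed nodes
        edges = {}                                              # (i, j) -> age
        error = [0.0, 0.0]
        for t in range(1, steps + 1):
            x = rnd.choice(samples)
            # find the two nearest nodes to the sample
            order = sorted(range(len(nodes)), key=lambda i: math.dist(nodes[i], x))
            s1, s2 = order[0], order[1]
            error[s1] += math.dist(nodes[s1], x) ** 2
            # move the winner and its topological neighbours toward x
            nodes[s1] = [w + eps_w * (xi - w) for w, xi in zip(nodes[s1], x)]
            for (i, j) in list(edges):
                if s1 in (i, j):
                    edges[(i, j)] += 1                          # age incident edges
                    n = j if i == s1 else i
                    nodes[n] = [w + eps_n * (xi - w) for w, xi in zip(nodes[n], x)]
            edges[tuple(sorted((s1, s2)))] = 0                  # refresh winner pair
            edges = {e: a for e, a in edges.items() if a <= max_age}
            # periodically insert a node where accumulated error is largest
            if t % insert_every == 0 and len(nodes) < max_nodes:
                q = max(range(len(nodes)), key=lambda i: error[i])
                nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
                f = max(nbrs, key=lambda i: error[i]) if nbrs else s2
                nodes.append([(a + b) / 2 for a, b in zip(nodes[q], nodes[f])])
                error.append(error[q] / 2)
                error[q] /= 2
            error = [e * 0.995 for e in error]                  # global error decay
        return nodes, edges

    # Fit the network to a ring of 2-D points:
    ring = [(math.cos(a / 10), math.sin(a / 10)) for a in range(63)]
    nodes, edges = gng(ring)
    print(len(nodes), "nodes,", len(edges), "edges")
    ```

    Because learning is incremental, the same network can keep adapting as new frames arrive, which is the property the paper exploits to avoid restarting the learning process.
    
    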

    Super-Resolution Convolutional Network for Image Quality Enhancement in Remote Photoplethysmography based Heart Rate Estimation

    Heart rate (HR) is one of the important vital parameters of the human body, and understanding this vital sign provides key insights into human wellness. Imaging photoplethysmography (iPPG) allows HR detection from video recordings, and its non-contact convenience over state-of-the-art contact methods has attracted much attention among researchers. Since it is a camera-based technique, measurement accuracy depends on the quality of the input images. In this paper, we present a pipeline for efficient measurement of HR that includes a learning-based super-resolution preprocessing step. This image-enhancement step has shown promising results on low-resolution input images and improves the performance of downstream iPPG algorithms. The experimental results verify the reliability of this method.
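    For context on what the enhanced frames feed into, a common iPPG back end estimates HR as the dominant frequency of a mean-intensity trace extracted from the face region. The sketch below illustrates that downstream step only (it is not the paper's super-resolution network), using a naive DFT over the plausible HR band.

    ```python
    import math

    def estimate_hr(signal, fps, lo=0.7, hi=4.0, step=0.01):
        """Estimate heart rate (bpm) as the dominant frequency of a
        mean-intensity trace, via a naive DFT over the HR band."""
        n = len(signal)
        mean = sum(signal) / n
        x = [v - mean for v in signal]          # remove the DC component
        best_f, best_p = lo, -1.0
        for k in range(int((hi - lo) / step) + 1):
            f = lo + k * step
            re = sum(x[t] * math.cos(2 * math.pi * f * t / fps) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * f * t / fps) for t in range(n))
            p = re * re + im * im               # spectral power at frequency f
            if p > best_p:
                best_f, best_p = f, p
        return best_f * 60.0                    # Hz -> beats per minute

    # Synthetic 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 seconds:
    trace = [0.5 + 0.05 * math.sin(2 * math.pi * 1.2 * t / 30) for t in range(300)]
    print(round(estimate_hr(trace, fps=30)))    # prints 72
    ```

    The dependence on image quality is visible here: noise in the per-frame intensities propagates directly into the spectrum, which is why a super-resolution preprocessing step can help on low-resolution inputs.
    
    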

    Classification of Uterine Fibroids in Ultrasound Images Using Deep Learning Model

    Uterine fibroids are abnormal growths that develop in the female uterus. These fibroids can sometimes cause severe problems such as miscarriage, and if they are not detected they ultimately grow in size and number. Among the different imaging modalities, ultrasound is the most efficient for detecting uterine fibroids. This paper proposes a deep learning model for fibroid detection with many advantages. The proposed model overcomes the drawbacks of existing fibroid-detection methodologies at all stages: noise removal, contrast enhancement, and classification. The preprocessed image is classified into two classes, fibroid and non-fibroid, using the MBFCDNN method. The method is validated using sensitivity, specificity, accuracy, precision, and F-measure; we find a sensitivity of 94.44%, a specificity of 95%, and an accuracy of 94.736%.
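    The reported metrics all derive from the binary confusion matrix of the fibroid/non-fibroid classifier. The helper below shows the standard definitions; the counts in the example are hypothetical, chosen only to roughly reproduce percentages of the reported magnitude, and are not the paper's data.

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Derive the standard evaluation metrics from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)            # recall on the fibroid class
        specificity = tn / (tn + fp)            # recall on the non-fibroid class
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        precision = tp / (tp + fp)
        f_measure = 2 * precision * sensitivity / (precision + sensitivity)
        return {"sensitivity": sensitivity, "specificity": specificity,
                "accuracy": accuracy, "precision": precision,
                "f_measure": f_measure}

    # Illustrative counts (not from the paper):
    m = diagnostic_metrics(tp=34, fp=2, tn=38, fn=2)
    print({k: round(v, 3) for k, v in m.items()})
    ```

    Sensitivity and specificity matter separately here because the clinical costs differ: a missed fibroid (false negative) lowers sensitivity, while a false alarm (false positive) lowers specificity.
    
    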

    Editorial: Explanation in Human-AI Systems

    The answer to the question "what is a good explanation for lay users" becomes more challenging within a broader, multi-disciplinary context such as that of our call, spanning philosophy, sociology, economics, and computer science. In this context, XAI should entail another level of discussion that needs to be addressed: our relationship, as humans, to AI systems in general. Notably, the term 'explain' derives from the Latin verb 'explanare', which means, literally, 'to make level'. Thus, the discussions and scientific endeavours of XAI could entail the notion of ourselves 'making level' with AI systems, which again brings us to the old question of our relationship to and with AI systems, and more specifically to our multifaceted cultural notions of AI and human beings, for instance in terms of master and servant, or of who is and will be dominating whom (e.g., Space Odyssey's HAL).

    Arguably, these intertwined cultural perceptions and (often incorrect, albeit popular) ideas regarding AI and its potential ultimately also influence lay persons' perceived need for explanation when interacting with AI systems. For example, when interacting with a social robot, we should ideally have no perceived need for explanation, or only as much or as little as we would have when interacting with any other social, human companion. The fact that we are seeing a machine, however, brings up expectations and impressions formed by popular culture, and thus a need for explanation to satisfy, perhaps, a need for safety, trust, etc.

    Therefore, answering the research question "what is a good explanation" is far from obvious. Seeking answers to this question has been the main incentive for the launch of this research topic.

    Transfer Learning based Natural Scene Classification for Scene Understanding by Intelligent Machines

    Scene classification plays an important role in the current emerging field of automation. Traditional classification methods rely on tedious processing techniques, and the advent of CNNs and deep learning models has greatly accelerated the task of scene classification. In this paper we consider an application area in which deep learning can assist in civil and military applications and aid in navigation. Current image classifiers concentrate on the various available labelled image datasets; this work concentrates on classifying scenes that contain pictures of people and places affected by floods, with the aim of assisting rescue officials during natural calamities, disasters, military attacks, etc. The proposed work describes a classification system that can categorise a small scene dataset using a transfer learning approach. We collected pictures of scenes from websites and created a small dataset covering different flood-affected activities. We utilised a transfer learning model, ResNet, which achieved an accuracy of 88.88% for ResNet50 and 91.04% for ResNet101 and provides a faster and more economical solution for the application involved.
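    The transfer learning idea used above — reuse a pretrained backbone as a frozen feature extractor and train only a small classifier head on the new, small dataset — can be shown in miniature. This is not the paper's ResNet training code: the "features" below are hypothetical stand-ins for backbone outputs, and the head is a tiny logistic-regression classifier trained by gradient descent.

    ```python
    import math, random

    def train_head(features, labels, lr=0.5, epochs=200, seed=0):
        """Transfer learning in miniature: the backbone is frozen, so only
        a small logistic-regression head is trained on its features."""
        rnd = random.Random(seed)
        d = len(features[0])
        w = [rnd.uniform(-0.1, 0.1) for _ in range(d)]
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(features, labels):
                z = sum(wi * xi for wi, xi in zip(w, x)) + b
                p = 1 / (1 + math.exp(-z))       # sigmoid prediction
                g = p - y                        # gradient of the log-loss
                w = [wi - lr * g * xi for wi, xi in zip(w, x)]
                b -= lr * g
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # Pretend a frozen backbone already mapped images to 2-D feature vectors:
    feats = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8)]
    labs = [1, 1, 0, 0]                          # 1 = flood-affected scene
    w, b = train_head(feats, labs)
    print([predict(w, b, x) for x in feats])     # prints [1, 1, 0, 0]
    ```

    Training only the head is what makes the approach economical on a small dataset: the expensive feature extractor is reused as-is, and only a handful of parameters are fitted to the new classes.
    
    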